Towards Reasoning Pragmatics

Author

  • Pascal Hitzler
Abstract

The realization of Semantic Web reasoning is central to substantiating the Semantic Web vision. However, current mainstream research on this topic faces serious challenges, which force us to question established lines of research and to rethink the underlying approaches.

1 What is Semantic Web Reasoning?

The ability to combine data, mediated by metadata, in order to derive knowledge which is only implicitly present, is central to the Semantic Web idea. This process of accessing implicit knowledge is commonly called reasoning, and formal model-theoretic semantics tells us exactly what knowledge is implicit in the data. Let us attempt to define reasoning in rather general terms: reasoning is about arriving at the exact answer(s) to a given query. Formulated in this generality, this encompasses many situations which would classically not be considered reasoning – but it will suffice for our purposes. Note that the definition implicitly assumes that there is an exact answer. In a reasoning context, such an exact answer would normally be defined by a model-theoretic semantics.

Footnote 1: It is rather peculiar that a considerable proportion of so-called Semantic Web research and publications ignores formal semantics. Even most textbooks fail to explain it properly. An exception is [7].

Footnote 2: Simply referring to a formal semantics is too vague, since this would also include procedural semantics, i.e. non-declarative approaches, and thus would cover most mainstream programming languages.

Current approaches to Semantic Web reasoning, however, which are mainly based on calculi drawn from predicate logic proof theory, face several serious obstacles.

– Scalability of algorithms and systems has been improving drastically, but systems are still incapable of dealing with data of the order of magnitude that can be expected on the World Wide Web. This is aggravated by the fact that classical proof theory does not readily allow for parallelization, and that the amount of data on the web grows at a rate similar to the growth in hardware efficiency.

– Realistic data, in particular on the web, is generally noisy. Established proof-theoretic approaches (even those including uncertainty or probabilistic methods) are unable to cope with this kind of data in a manner which is ready for large-scale applications.

– It is a huge engineering effort to create web data and ontologies which are of sufficiently high quality for current reasoning approaches, and usually beyond the abilities of application developers. The resulting knowledge bases are furthermore severely limited in terms of reusability in other application contexts.

The state of the art shows no indication that approaches based on logical proof theory will overcome these obstacles anytime soon in such a way that large-scale applications on the web can be realized. Since reasoning is central to the Semantic Web vision, we are forced to rethink our traditional methods, and should be prepared to tread new paths. A key idea to this effect, voiced by several researchers (see e.g. [3, 23]), is to explore alternative methods for reasoning. These may still be based more or less closely on proof-theoretic considerations, or they may not. They could, e.g., utilize methods from statistical machine learning or from nature-inspired computing.
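To make the notion of knowledge that is only implicitly present concrete, the following is a minimal sketch in Python, assuming the third-party rdflib and owlrl libraries and using invented example data: three explicitly stated triples entail a fourth one under the RDFS semantics, and materializing the closure makes that implicit answer available to queries.

```python
# A minimal sketch of implicit knowledge under RDFS semantics.
# Assumes the rdflib and owlrl Python libraries; all URIs and facts are invented.
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()

# Explicitly stated data and schema (metadata):
g.add((EX.Tweety, RDF.type, EX.Penguin))        # Tweety is a penguin
g.add((EX.Penguin, RDFS.subClassOf, EX.Bird))   # every penguin is a bird
g.add((EX.Bird, RDFS.subClassOf, EX.Animal))    # every bird is an animal

query = (EX.Tweety, RDF.type, EX.Animal)
print(query in g)   # False: the fact is nowhere stated explicitly

# The model-theoretic semantics nevertheless sanctions it; materializing the
# RDFS deductive closure makes the implicit knowledge explicit.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
print(query in g)   # True: derived via the two subclass axioms
```

Whether such implicit facts are materialized up front or computed at query time is an implementation matter; the model-theoretic semantics is what fixes the exact answer.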
Researchers who are used to thinking in classical proof-theoretic terms are likely to object to the thought of such alternative methods, arguing that a relaxation of strict proof-theoretic requirements on algorithms, such as soundness and completeness, would pave the way for arbitrary algorithms which do not perform logical reasoning at all, and thus fail to adhere to the specification provided by the formal semantics underlying the data – and would thereby jeopardize the Semantic Web vision. While such arguments have some virtue, it needs to be stressed that the nature of the underlying algorithm is, effectively, unimportant, as long as the system adheres to the specification, i.e. to the formal semantics.

Imagine, as a thought experiment, a black-box system which performs sound and complete reasoning in all application settings it is made for – or at least to the extent to which standard reasoning systems are sound and complete. Does it matter then whether the underlying algorithm is provably sound and complete? I guess not. The only important thing is that its performance is sound and complete. If the black box were orders of magnitude faster than conventional reasoners, but you were told that it is based on statistical methods, which one would you choose to work with? Obviously, the answer depends on the application scenario: if you'd like to manage a bank account, you may want to stick with the proof-theoretic approach, since you can prove formally that the algorithm does what it should; but if you use the algorithm for web search, the quicker algorithm might be the better choice. Also, your choice will likely depend on the evidence given as to the correctness of the black-box algorithm in application settings.

Footnote 3: Usually, they are not sound and complete, although they are based on underlying algorithms which are, theoretically, sound and complete. Incompleteness comes from the fact that resources, including time, are limited. Unsoundness comes from bugs in the system.

This last thought is important: if a reasoning system is not based on proof theory, then there must be a quality measure for the system, i.e., the system must be evaluated against the gold standard, which is given by the formal semantics, or equivalently by the provably sound and complete implementations [23].

If we bring noisy data, as found on the web, into the picture, it becomes even clearer why a fixation on the soundness and completeness of reasoning systems is counterproductive for the Semantic Web: in the presence of such data, even the formal model-theoretic semantics breaks down, and it is quite unclear how to develop proof-theory-based algorithms for it. The notions of soundness and completeness of reasoning in the classical sense appear to be almost meaningless. But only almost, since alternative reasoning systems which are able to cope with noisy data can still be evaluated against the gold standard on non-noisy data, for quality assurance.

In the following, we revisit the role of soundness and completeness for reasoning, and argue further for an alternative perspective on these issues (Section 2). We also discuss key challenges which need to be addressed in order to realize reasoning on and for the Semantic Web, in particular the questions of expressivity of ontology languages (Section 3), roads to bootstrapping (Section 4), knowledge acquisition (Section 5), and user interfacing (Section 6). We conclude in Section 7.
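The evaluation idea itself is simple, even though a full statistical treatment (as in [23]) is not: take the answer set of a provably sound and complete reference reasoner as the gold standard and score any other system against it. The following is a minimal sketch of my own, not the framework of [23], with soundness measured as precision and completeness as recall over answer sets.

```python
# A minimal sketch of gold-standard evaluation for an arbitrary reasoning
# system. Answers are modeled abstractly as sets of entailed statements.

def evaluate(gold_answers: set, system_answers: set) -> dict:
    """Soundness ~ precision: fraction of returned answers that are correct.
    Completeness ~ recall: fraction of correct answers that are returned."""
    correct = gold_answers & system_answers
    soundness = len(correct) / len(system_answers) if system_answers else 1.0
    completeness = len(correct) / len(gold_answers) if gold_answers else 1.0
    return {"soundness": soundness, "completeness": completeness}

# Invented example: the gold standard sanctions four answers; the black box
# returns three of them plus one wrong answer.
gold = {"A(x)", "B(x)", "C(x)", "D(x)"}
black_box = {"A(x)", "B(x)", "C(x)", "E(x)"}
print(evaluate(gold, black_box))   # {'soundness': 0.75, 'completeness': 0.75}
```

In practice one would aggregate such scores over a benchmark of queries and knowledge bases, which is exactly where a statistical framework such as that of [23] comes in.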
2 The Role of Soundness, Completeness, and Computational Complexity

Computational complexity has classically been a consideration in the development of description logics, which underlie the Web Ontology Language OWL – currently the most prominent ontology language for Semantic Web reasoning. In particular, OWL is a decidable logic. The currently ongoing revision OWL 2 [6] furthermore explicitly defines fragments, called profiles, with lower (in fact, polynomial) computational complexity.

Soundness and completeness are central properties of classical reasoning algorithms for logic-based knowledge representation languages, and are thus central notions in the development of Semantic Web reasoning around OWL. However, performance issues have prompted researchers to advocate approximate reasoning for the Semantic Web (see e.g. [3, 23]). Arguing for this approach provokes radically different kinds of reactions: some logicians appear to abhor the mere thought, while many application developers find it the most natural thing to do. Often it turns out that the opposing factions misunderstand the arguments: counterarguments usually state that leaving the model-theoretic semantics behind would lead to arbitrariness and thus to a loss of quality. So let it be stated again explicitly: approximate reasoning shall not replace sound and complete reasoning in the sense that the latter would no longer be needed. Quite the contrary: approximate reasoning in fact needs the sound and complete approaches as a gold standard for evaluation and quality assurance. The following shall help to make this relationship clear.

2.1 Sound but Incomplete Reasoning

There appears to be not much argument against this in the Semantic Web community, even from logicians: they are used to it, since some KR languages, including first-order predicate logic, are only semi-decidable, i.e. completeness can only be achieved with unlimited time resources anyway. For decidable languages, however, a sound but incomplete reasoner should always be evaluated against the gold standard, i.e., against a sound and complete reasoner, in order to show the amount of incompleteness incurred versus the gain in efficiency. Interestingly, this is rarely done in a structured way, which is, in my opinion, a serious neglect. A statistical framework for evaluation against the gold standard is presented in [23].

Footnote 4: Some non-monotonic logics are not even semi-decidable.

2.2 Unsound but Complete Reasoning

Allowing reasoning algorithms to be unsound appears to be much more controversial, and the usefulness of this concept seems to be harder to grasp. However, there are obvious examples. Consider, e.g., fault detection in a power plant in case of an emergency: the system shall determine (quickly!) which parts of the plant need to be shut down. Obviously, it is of the highest importance that the critical part is contained in the shutdown, while it is less of a problem if too many other parts are shut down, too. Another obvious example is semantic search: in most cases, users would prefer to quickly get a set of replies among which the correct one can be found, rather than wait longer for one exact answer. Furthermore, sound-incomplete and unsound-complete systems can sometimes be teamed up and work in parallel to provide better overall performance (see e.g. [24]); a simple way of combining their outputs is sketched below.

Footnote 5: This example is due to Frank van Harmelen, personal communication.
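The combination below is my own illustration of the teaming idea, not the architecture of [24]: answers confirmed by the sound-but-incomplete system are definitely entailed, anything not returned by the unsound-but-complete system is definitely not entailed, and the remainder are candidates that can be verified later or shown to the user as-is.

```python
# A minimal sketch of teaming up a sound-but-incomplete reasoner with an
# unsound-but-complete one (my illustration, not the system of [24]). Both
# reasoners are modeled abstractly as functions from a query to a set of answers.
from concurrent.futures import ThreadPoolExecutor

def combine(sound_incomplete, unsound_complete, query):
    # Run both systems in parallel on the same query.
    with ThreadPoolExecutor(max_workers=2) as pool:
        proven_future = pool.submit(sound_incomplete, query)
        complete_future = pool.submit(unsound_complete, query)
        proven, over_approx = proven_future.result(), complete_future.result()
    return {
        "guaranteed": proven,                 # sound system: every answer is correct
        "candidates": over_approx - proven,   # need verification: may contain wrong answers
        # Anything outside (proven | over_approx) is guaranteed NOT to be an
        # answer, because the complete system never misses a real answer.
    }

# Invented toy reasoners; the true answers are {a, b, c}.
sound_incomplete = lambda q: {"a", "b"}               # correct but misses "c"
unsound_complete = lambda q: {"a", "b", "c", "d"}     # finds everything but adds a wrong "d"
print(combine(sound_incomplete, unsound_complete, "Q"))
# guaranteed: {'a', 'b'}, candidates: {'c', 'd'} (set ordering may vary)
```

The same decomposition also suggests anytime behaviour: report the guaranteed answers first, and refine the candidate set as time permits.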
2.3 Unsound and Incomplete Reasoning

Following the above arguments to their logical conclusion, it should become clear why unsound and incomplete reasoning has its place among applications. Remember that there is the gold standard against which such systems should be evaluated. And obviously there is no reason to stray from a sound and complete approach if the knowledge base is small enough to allow for it.

The most prominent historic example of an unsound and incomplete yet very successful reasoning system is Prolog. Traditionally, the unification algorithm, which is part of the SLD-resolution proof procedure used in Prolog [13], is used without the so-called occurs check, which, depending on the exact implementation, can cause unsoundness [16]. This omission was made for reasons of efficiency, and turned out to be feasible since it rarely causes a problem for Prolog programmers.

Footnote 6: To obtain a wrong answer, execute the query ?- p(a). on the logic program consisting of the two clauses p(a) :- q(X,X). and q(X,f(X))., e.g. under SWI-Prolog. The example is due to Markus Krötzsch.

Likewise, it is not unreasonable to expect that carefully engineered unsound and incomplete reasoning approaches can be useful on the Semantic Web, in particular when sound and complete systems fail to provide results within a reasonable time span. Furthermore, there is nothing wrong with using entirely alternative approaches to this kind of reasoning, e.g., approaches which are not based on proof theory. To give an example of the latter, we refer to [2], where the authors use a statistical learning approach based on support vector machines. They train their machine to infer class membership in ALC, a description logic related to OWL, and achieve 90% coverage. Note that this is done without any proof theory, other than to obtain the training examples. In effect, their system learns to reason with high coverage without performing logical deduction in the proof-theoretic sense. For a statistical framework for evaluation against the gold standard we refer again to [23].
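The unsoundness caused by omitting the occurs check is easy to reproduce outside of Prolog. The following is a minimal, self-contained unification sketch of my own (not code from the paper) run on the two atoms from footnote 6: with the occurs check the unification fails, as it should; without it, a circular binding is created and the query p(a) becomes "provable" even though it is not a logical consequence.

```python
# A minimal sketch of first-order unification with an optional occurs check.
# Terms: variables are ("var", name); compound terms are (functor, arg1, ...).

def walk(term, subst):
    # Follow variable bindings in the substitution.
    while isinstance(term, tuple) and term[0] == "var" and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    # Does `var` occur inside `term` under the current substitution?
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple) and term[0] != "var":
        return any(occurs(var, arg, subst) for arg in term[1:])
    return False

def unify(t1, t2, subst, occurs_check=True):
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    for a, b in ((t1, t2), (t2, t1)):
        if isinstance(a, tuple) and a[0] == "var":
            if occurs_check and occurs(a, b, subst):
                return None                      # refuse to build an infinite term
            return {**subst, a: b}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst, occurs_check)
            if subst is None:
                return None
        return subst
    return None

X, Y = ("var", "X"), ("var", "Y")
goal = ("q", X, X)            # body of the clause  p(a) :- q(X,X).
fact = ("q", Y, ("f", Y))     # the fact  q(X, f(X)),  renamed apart
print(unify(goal, fact, {}, occurs_check=True))    # None: unification correctly fails
print(unify(goal, fact, {}, occurs_check=False))   # a substitution binding Y to f(Y)
```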
2.4 Computational Complexity and Decidability

Considerations of computational complexity and decidability have been driving research around description logics, which underlie OWL, from the beginning. At the same time, there are more and more critical voices concerning the fixation of that research on these issues, since it is not quite clear how practical systems would benefit from them. Indeed, theoretical (worst-case) computational complexity is hardly a relevant measure for the performance of real systems. Likewise, decidability is only guaranteed assuming bug-free implementations – which is an unrealistic assumption – and given enough resources – which is also unrealistic, since the underlying algorithms often require exponential time in the worst case.

These objections, however, rest on a misconception: computational complexity and decidability are not practical measures with a direct meaning in application contexts. They are rather a priori measures for language and algorithm development, and the recent history of OWL language development indicates that these a priori measures have indeed done a good job. It is obviously better to have such theoretical means for the conceptual work of creating language features than to have no measures at all. And indeed this has worked out well, since, e.g., reasoning systems working on realistic OWL knowledge bases currently seem to behave rather well despite the high worst-case computational complexity.

Taking the perspective of approximate reasoning algorithms as laid out earlier, it is actually a decidedly positive feature of Semantic Web knowledge representation languages that systems exist which can serve as a gold-standard reference. Considering the difficulties other disciplines (like information retrieval) face in creating gold standards, we are indeed handed the gold standard on a silver plate. We can use this to our advantage.

3 Diverse Knowledge Representation Issues

Over 50 years of KR research, many issues related to the representation of non-classical knowledge have been investigated. Many of the research results obtained in this realm are currently being carried over to ontology languages around OWL, including abductive reasoning, uncertainty handling, inconsistency handling and paraconsistent reasoning, closed-world reasoning and non-monotonicity, belief revision, etc. However, all these approaches face the same problems that OWL reasoning faces, foremost scalability and dealing with realistic, noisy data. Indeed, under most of these approaches runtime performance becomes worse, since the reasoning problems generally become harder in terms of computational complexity. Nevertheless, research on the logical foundations of these knowledge representation issues, as currently being carried out, is needed to establish the gold standard. At this time there is a certain neglect, however, in combining several paradigms; e.g., it is quite unclear how to marry paraconsistent reasoning with uncertainty handling.

Research into enhancing the expressivity of ontology languages can roughly be divided into the following.

– Classical logic features: This line of research follows the well-trodden path of extending e.g. OWL with further expressive features, while attempting to retain decidability and, in some cases, low computational complexity. Some concrete suggestions for next steps in this direction are given in the appendix.

– Extralogical features: These include datatypes and additional data structures, such as Description Graphs [19].

– Supraclassical logic: Logical features related to commonsense reasoning, such as abduction and explanations (e.g., [8]), paraconsistency (e.g., [15]), belief revision, closed-world reasoning (e.g., [4]), uncertainty handling (e.g., [10, 14]), etc.

Footnote 7: I am personally critical of fuzzy logic and probabilistic logic approaches in practice for Semantic Web issues. Dealing with noisy data on the web does not seem to fall easily into the fuzzy or probabilistic category. So probably new ideas are needed here.

There is hardly any work investigating approximate reasoning solutions for supraclassical logics. Investigations into these issues should first establish the gold standard, following sound logical principles including computational complexity issues. Only then should extensions towards approximate reasoning be pursued.

4 Bootstrapping Reasoning

How do we get from A (today) to B (reasoning that works on the Semantic Web)? I believe that a promising approach lies in bootstrapping existing applications which use little or no reasoning, based e.g. on RDF. The idea is to enhance these applications very carefully with a bit more reasoning, in order to clearly understand the added value and the difficulties one faces when doing this. A (very) generic workflow for the bootstrapping may be as follows.

1. Identify an (RDF) application where enhanced reasoning would bring added value.
2. Identify the ontology language constructs which would be needed for expressing the knowledge required for the added value.

3. Identify an ontology language (an OWL profile or an OWL+Rules hybrid) which covers these additional language constructs.

4. Find a suitable reasoner for the enhanced language.

5. Enhance the knowledge base and plug the software components together.

The point of these exercises is not only to show that more reasoning capabilities bring added value, but also to identify obstacles in the bootstrapping process.

5 Overcoming the Ontology Acquisition Bottleneck

The ontology acquisition bottleneck for logically expressive ontologies comes partly from the fact that sound and complete reasoning algorithms work only on carefully devised ontologies, and in many cases an ontology expert is needed to develop them. Creating such high-quality ontologies is very costly. A partial solution to this problem is related to (1) noise handling and (2) the bootstrapping idea. With the current fixation on sound and complete reasoning it cannot be expected that usable ontologies (in the sound and complete sense) will appear in large quantities, e.g. on the web. However, it is conceivable that, e.g., Linked Open Data (LoD) could be augmented with more expressive schema data to allow, e.g., for reasoning-based semantic search. The resulting extended LoD cloud would still be noisy and not readily usable with sound and complete approaches, so reasoning approaches which can handle noise are needed. This is also in line with the bootstrapping idea: we already have a lot of metadata available, and in order to proceed we need to make efforts to enhance this data, and to find robust reasoning techniques which can deal with this real-world noisy data.
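As one very small illustration of what noise-tolerant inference over schema-enhanced data could look like (my own sketch, not a technique proposed in the paper, with all data invented), the snippet below attaches confidence scores to triples, propagates them through a single RDFS-style subclass rule, and discards derivations that fall below a threshold. Setting the threshold to zero and all confidences to 1 recovers the crisp closure, so the sketch can still be evaluated against the gold standard in exactly the sense discussed in Section 2.

```python
# A minimal sketch of confidence-weighted RDFS-style subclass inference over
# noisy triples. All URIs, facts and confidence values are invented.

def materialize(kb, threshold=0.5):
    """Forward-chain the rule (x type C) & (C subClassOf D) => (x type D),
    combining confidences multiplicatively and keeping only derivations
    whose confidence reaches the threshold."""
    kb = dict(kb)
    changed = True
    while changed:
        changed = False
        for (s, p, o), c1 in list(kb.items()):
            if p != "rdf:type":
                continue
            for (c, q, d), c2 in list(kb.items()):
                if q == "rdfs:subClassOf" and c == o:
                    conf, key = c1 * c2, (s, "rdf:type", d)
                    if conf >= threshold and conf > kb.get(key, 0.0):
                        kb[key] = conf
                        changed = True
    return kb

# (subject, predicate, object) -> confidence in [0, 1]
triples = {
    ("ex:Berlin", "rdf:type", "ex:City"): 0.95,
    ("ex:City", "rdfs:subClassOf", "ex:PopulatedPlace"): 1.0,
    ("ex:PopulatedPlace", "rdfs:subClassOf", "ex:Place"): 1.0,
    ("ex:Atlantis", "rdf:type", "ex:City"): 0.30,   # a noisy, low-confidence fact
}

for triple, conf in sorted(materialize(triples).items()):
    print(round(conf, 2), triple)
# ("ex:Berlin", "rdf:type", "ex:Place") is derived with confidence 0.95,
# while nothing is derived from the dubious Atlantis fact.
```

How such confidences should be obtained and combined is left open here; the point is only that the result of any such procedure can still be compared against the gold standard on clean data, as argued in Section 2.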

Publication year: 2009